Conversation

@sujay-d07 (Collaborator) commented Dec 27, 2025

Related Issue

Closes #380

Summary

Removed the redundant --offline flag from the CLI as it duplicates existing offline functionality:

  • Semantic cache already provides automatic offline capability for cached requests (no flag needed)
  • CORTEX_PROVIDER=ollama enables true offline operation with local LLM

Changes

  • Removed --offline CLI argument and related code from cortex/cli.py
  • Removed offline parameter from CommandInterpreter and AskHandler
  • Updated documentation (docs/COMMANDS.md, docs/ISSUE-268-TESTING.md)
  • Added breaking change notice to CHANGELOG.md

Migration

Users who were using --offline should:

  • For cached responses: Just use cortex install <package> (cache works automatically)
  • For true offline: Set export CORTEX_PROVIDER=ollama
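In shell terms, the migration is a one-line environment change; the echo is only a sanity check:

```shell
# Cached requests need no setup: the semantic cache answers repeats
# automatically. For true offline, point Cortex at a local Ollama instance:
export CORTEX_PROVIDER=ollama
echo "provider=$CORTEX_PROVIDER"
```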

Checklist

  • Tests pass (pytest tests/) - 847 passed, 10 skipped
  • MVP label added if closing MVP issue
  • Update "Cortex -h" - --offline removed from help output

Summary by CodeRabbit

Release Notes

  • Breaking Changes

    • Removed the --offline flag from CLI. Offline functionality now requires setting CORTEX_PROVIDER=ollama environment variable.
  • Documentation

    • Updated command documentation to remove offline option references.
    • Testing guides revised to focus on semantic cache verification and Ollama-based offline workflows.
  • Tests

    • Removed offline mode test cases and updated fallback behavior validations.


Copilot AI review requested due to automatic review settings December 27, 2025 11:42
@coderabbitai (Contributor, bot) commented Dec 27, 2025

📝 Walkthrough

This PR removes the --offline flag and associated offline-only execution paths from the CLI, core request handlers, and test suite. Users can still leverage offline functionality through semantic caching (automatic) or by setting CORTEX_PROVIDER=ollama for local LLM operation. Documentation and tests have been updated to reflect this simplification.

Changes

  • Core offline removal (cortex/ask.py, cortex/cli.py, cortex/llm/interpreter.py): Removed the offline parameter from AskHandler.__init__ and CommandInterpreter.__init__; deleted the CLI --offline flag and offline attribute; removed offline-mode guards that previously raised errors when no cached response existed.
  • Documentation updates (docs/COMMANDS.md, docs/ISSUE-268-TESTING.md): Removed the --offline flag from the command reference; refocused the test guide from offline-mode validation to semantic cache hit verification; added an Ollama-based offline workflow using CORTEX_PROVIDER=ollama.
  • Changelog (CHANGELOG.md): Documented the breaking change, i.e. removal of the --offline flag, with guidance on offline alternatives (caching and the Ollama provider).
  • Test updates (tests/test_ask.py, tests/test_cli.py): Removed the test_ask_offline_no_cache test; updated test_install_no_api_key to mock Ollama-unavailability fallback behavior.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • PR #299 — Directly conflicts; undoes prior work that added offline support and --offline flag to same constructors and CLI flow.
  • PR #362 — Modifies AskHandler in cortex/ask.py for offline parameter handling; this PR removes those offline-specific changes.
  • PR #192 — Modifies CommandInterpreter constructor signature in cortex/llm/interpreter.py; this PR continues structural changes to that class.

Suggested labels

MVP

Suggested reviewers

  • mikejmorgan-ai
  • dhvll

Poem

🐰 The offline flag hops away today,
No more confusion in Cortex's way!
Cache and Ollama—two paths, clean and bright,
Simplicity wins; the design feels just right! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (4 passed)

  • Title check ✅ Passed: The title 'Refactor: Remove offline mode support and update documentation' accurately and concisely summarizes the main change: removal of the --offline flag and related documentation updates.
  • Description check ✅ Passed: The PR description includes the related issue (#380), a clear summary of changes, specific files modified, migration guidance for users, and a completed checklist with test results.
  • Linked Issues check ✅ Passed: The PR implements all coding requirements from issue #380: removes the --offline flag from the CLI and from AskHandler/CommandInterpreter, updates documentation, and adds a breaking change notice.
  • Out of Scope Changes check ✅ Passed: All changes are directly scoped to issue #380 objectives. Test modifications in tests/test_cli.py align with removing offline support and validating Ollama fallback behavior.

@sonarqubecloud commented

@coderabbitai (Contributor, bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/test_cli.py (1)

62-69: Consider a more descriptive test name and additional assertions.

The test correctly simulates Ollama unavailability when no API key is set and verifies the failure path. However, the test could be improved for clarity and completeness:

  1. Test name: test_install_no_api_key doesn't clearly convey that it's testing the Ollama fallback failure scenario. Consider renaming to test_install_no_api_key_ollama_unavailable.

  2. Error message verification: The test only checks the return code but doesn't verify that an appropriate error message is shown to the user. Consider adding an assertion to check that the mocked error message is properly handled.

Suggested improvements
```diff
     @patch.dict(os.environ, {}, clear=True)
-    def test_install_no_api_key(self):
+    def test_install_no_api_key_ollama_unavailable(self):
         # When no API key is set, the CLI falls back to Ollama.
         # If Ollama is running, this should succeed. If not, it should fail.
         # We'll mock Ollama to be unavailable to test the failure case.
         with patch("cortex.llm.interpreter.CommandInterpreter.parse") as mock_parse:
             mock_parse.side_effect = RuntimeError("Ollama not available")
             result = self.cli.install("docker")
             self.assertEqual(result, 1)
+            # Verify parse was called (indicating Ollama fallback was attempted)
+            mock_parse.assert_called_once()
```
docs/ISSUE-268-TESTING.md (1)

56-68: Add Ollama installation prerequisite for Test 4.

Test 4 describes using Ollama for true offline operation, but the prerequisites section (lines 5-14) doesn't mention that Ollama needs to be installed on the system. Users attempting Test 4 without Ollama installed will encounter errors.

Suggested addition to prerequisites

Add to the Prereqs section around line 8:

```diff
 ## Prereqs
 
 - Python 3.10+
 - Project installed in editable mode
+- Ollama installed (optional, required only for Test 4)
+  - Install from: https://ollama.ai/
+  - Pull a model: `ollama pull llama2`
```
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f01549 and 04b82e8.

📒 Files selected for processing (8)
  • CHANGELOG.md
  • cortex/ask.py
  • cortex/cli.py
  • cortex/llm/interpreter.py
  • docs/COMMANDS.md
  • docs/ISSUE-268-TESTING.md
  • tests/test_ask.py
  • tests/test_cli.py
💤 Files with no reviewable changes (4)
  • tests/test_ask.py
  • cortex/llm/interpreter.py
  • docs/COMMANDS.md
  • cortex/ask.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_cli.py
  • cortex/cli.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
🧬 Code graph analysis (2)
tests/test_cli.py (1)
  cortex/cli.py (1)
    • install (314-560)

cortex/cli.py (1)
  cortex/llm/interpreter.py (1)
    • CommandInterpreter (18-387)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Build Package
  • GitHub Check: Agent
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
🔇 Additional comments (3)
CHANGELOG.md (1)

22-26: LGTM! Clear breaking change documentation.

The changelog entry clearly documents the removal of the --offline flag and provides helpful migration guidance for users. The explanation of alternatives (semantic cache for cached requests and CORTEX_PROVIDER=ollama for true offline operation) aligns well with the PR objectives.

cortex/cli.py (1)

356-356: LGTM! Correct removal of offline parameter.

The instantiation of CommandInterpreter correctly omits the offline parameter, aligning with the updated signature in cortex/llm/interpreter.py. The change is minimal and maintains proper integration with the refactored offline behavior.

docs/ISSUE-268-TESTING.md (1)

43-55: LGTM! Clear cache verification workflow.

Test 3 provides a clear and actionable workflow for verifying cache hits on repeated requests. The expected outcomes are well-defined, and the addition of the explicit cortex cache stats step helps users confirm that caching is working correctly.

Copilot AI (Contributor) left a comment
Pull request overview

This PR removes the redundant --offline CLI flag, streamlining the offline functionality by relying on the existing semantic cache for cached requests and CORTEX_PROVIDER=ollama for true offline operation.

  • Removed --offline flag and related code from CLI, CommandInterpreter, and AskHandler
  • Updated documentation to guide users toward the semantic cache and Ollama provider for offline capabilities
  • Modified tests to reflect the removal of offline mode functionality

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated no comments.

Summary per file:

  • cortex/cli.py: Removed --offline argument parsing, the self.offline attribute, and the offline parameter passed to handlers
  • cortex/llm/interpreter.py: Removed the offline parameter from __init__ and the offline-mode check that raised RuntimeError
  • cortex/ask.py: Removed the offline parameter from __init__ and the offline-mode check that raised RuntimeError
  • tests/test_cli.py: Updated test_install_no_api_key to mock Ollama unavailability instead of testing offline-flag behavior
  • tests/test_ask.py: Removed the test_ask_offline_no_cache test case that validated offline-mode error handling
  • docs/COMMANDS.md: Removed the --offline flag from global options documentation
  • docs/ISSUE-268-TESTING.md: Updated the testing guide to replace offline-flag tests with semantic cache and Ollama provider examples
  • CHANGELOG.md: Added a breaking change notice documenting the removal and migration path

@jaysurse (Collaborator) commented Dec 29, 2025

@Suyashd999
I reviewed and verified this PR; it fulfills the requirements of Issue #380 and looks good to merge.

@Suyashd999 Suyashd999 merged commit 65bb84d into cortexlinux:main Dec 30, 2025
19 checks passed
Development

Successfully merging this pull request may close these issues:

  • Remove --offline flag - redundant with existing offline capabilities

3 participants